2 research outputs found

    Modeling, Control and Automatic Code Generation for a Two-Wheeled Self-Balancing Vehicle Using Modelica

    The main goal of this project was to use Modelica features for embedded systems, real-time systems and basic mechanical modeling in the control of a two-wheeled self-balancing personal vehicle. The Elektor Wheelie, a Segway-like vehicle, was selected as the process to control. Modelica is an object-oriented language aimed at the modeling of complex systems. The work in the thesis used the Modelica-based modeling and simulation tool Dymola. The Elektor Wheelie has an 8-bit programmable microcontroller (ATmega32) which was used as the control unit. This microcontroller has no hardware support for floating-point arithmetic, and software emulation has a high cost in processor time. Therefore a fixed-point representation of real values was used, as it requires only integer operations. In order to obtain a linear representation useful for the control design, a simple mechanical model of the vehicle was created using Dymola. The control strategy was a linear quadratic regulator (LQR) based on a state-space representation of the vehicle. Two methods of estimating the platform tilt angle were tested: a complementary filter and a Kalman filter. The Kalman filter performed better at estimating the platform tilt angle and at removing the gyroscope drift from the angular velocity signal. The state estimators as well as the controller task were generated automatically using Dymola; the same tasks were programmed manually using fixed-point arithmetic in order to evaluate the feasibility of the automatically generated code. At this stage, it was shown that the automatically generated fixed-point code produced results similar to the manual implementation after slight modifications. Finally, a simple communication application was created which allowed real-time plotting of state variables and remote control of the vehicle, using elements of the Modelica EmbeddedSystems library.
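    The combination of tilt estimation and floating-point-free arithmetic mentioned above can be illustrated with a short sketch. This is not the thesis code: it shows a generic complementary filter step written in Q16.16 fixed-point, the kind of integer-only formulation an 8-bit MCU such as the ATmega32 can run cheaply. All names and constants here are illustrative assumptions.

    ```cpp
    #include <cstdint>

    typedef int32_t q16_t;                 // Q16.16 fixed-point number
    static const q16_t Q_ONE = 1 << 16;    // 1.0 in Q16.16

    // Multiply two Q16.16 values via a 64-bit intermediate to avoid overflow.
    static q16_t q_mul(q16_t a, q16_t b) {
        return (q16_t)(((int64_t)a * b) >> 16);
    }

    // One complementary-filter step: blend the integrated gyro rate
    // (accurate short-term, but drifting) with the accelerometer tilt
    // angle (noisy, but drift-free):
    //   angle' = alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle
    q16_t complementary_step(q16_t angle, q16_t gyro_rate, q16_t accel_angle,
                             q16_t dt, q16_t alpha) {
        q16_t gyro_est = angle + q_mul(gyro_rate, dt);
        return q_mul(alpha, gyro_est) + q_mul(Q_ONE - alpha, accel_angle);
    }
    ```

    With alpha close to 1.0 (0.98 is a common choice) the gyro dominates over short horizons while the accelerometer slowly corrects the drift; the Kalman filter the thesis favours additionally adapts this weighting from the signal statistics.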

    Terrängkartläggning för autonoma fordon (Terrain Mapping for Autonomous Vehicles)

    Autonomous vehicles have come to the forefront of the automotive industry in the pursuit of safer and more efficient transportation systems. One of the main issues for every autonomous vehicle is being aware of its position and of the presence of obstacles along its path. The current project addresses the pose and terrain-mapping problem by integrating a visual odometry method and a mapping technique. An RGB-D camera, the Kinect v2 from Microsoft, was chosen as the sensor for capturing information from the environment. It was connected to an Intel mini-PC for real-time processing. Both pieces of hardware were mounted on board a four-wheeled research concept vehicle (RCV) to test the feasibility of the solution at outdoor locations. The Robot Operating System (ROS) was used as the development environment, with C++ as the programming language. The visual odometry strategy consisted of a frame registration algorithm called Adaptive Iterative Closest Keypoint (AICK), based on Iterative Closest Point (ICP) and using Oriented FAST and Rotated BRIEF (ORB) as the image keypoint extractor. A grid-based local costmap of the rolling-window type was implemented to obtain a two-dimensional representation of the obstacles close to the vehicle within a predefined area, in order to allow further path-planning applications. Experiments were performed both offline and in real time to test the system in indoor and outdoor scenarios. The results confirmed the viability of using the designed framework to track the pose of the camera and to detect objects in indoor environments. Outdoor environments, however, exposed the limitations of the RGB-D sensor, making the current system configuration unfeasible for outdoor use.
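    The rolling-window costmap described above can be sketched as a fixed-size 2D occupancy grid that is kept centred on the vehicle. This is a minimal illustration, not the thesis implementation: the class name, cell values (0 = free, 100 = occupied) and the simplification of clearing the whole grid on each re-centre are assumptions for the example.

    ```cpp
    #include <algorithm>
    #include <cmath>
    #include <vector>

    class RollingCostmap {
    public:
        RollingCostmap(int cells, double resolution)
            : cells_(cells), res_(resolution),
              origin_x_(0.0), origin_y_(0.0),
              grid_(cells * cells, 0) {}

        // Re-centre the window on the vehicle pose (vx, vy). A real
        // implementation would shift surviving cells; here the grid is
        // simply cleared for brevity.
        void recenter(double vx, double vy) {
            origin_x_ = vx - 0.5 * cells_ * res_;
            origin_y_ = vy - 0.5 * cells_ * res_;
            std::fill(grid_.begin(), grid_.end(), 0);
        }

        // Mark an obstacle observed at world coordinates (wx, wy).
        // Returns false if the point falls outside the local window.
        bool markObstacle(double wx, double wy) {
            int cx = (int)std::floor((wx - origin_x_) / res_);
            int cy = (int)std::floor((wy - origin_y_) / res_);
            if (cx < 0 || cy < 0 || cx >= cells_ || cy >= cells_) return false;
            grid_[cy * cells_ + cx] = 100;   // 100 = occupied, 0 = free
            return true;
        }

        // Cost lookup for a planner; -1 means "outside the window".
        int costAt(double wx, double wy) const {
            int cx = (int)std::floor((wx - origin_x_) / res_);
            int cy = (int)std::floor((wy - origin_y_) / res_);
            if (cx < 0 || cy < 0 || cx >= cells_ || cy >= cells_) return -1;
            return grid_[cy * cells_ + cx];
        }

    private:
        int cells_;                // window width/height in cells
        double res_;               // metres per cell
        double origin_x_, origin_y_;
        std::vector<int> grid_;
    };
    ```

    The window travels with the vehicle, so only nearby obstacles (the ones relevant to short-horizon path planning) are stored, keeping memory use constant regardless of how far the vehicle drives.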